For more than 60 years, proponents of artificial intelligence have been divided, often acrimoniously, into two camps: autonomy versus augmentation. Autonomy proponents advocate the digital replacement of human workers in the name of efficiency, while augmentation advocates argue that machines, no matter how intelligent they become, should serve as nothing more than tools for their human creators.
While I see some scenarios where autonomy is preferable, such as robots that dismantle explosive devices, I am generally in the augmentation camp. I have long admired Doug Engelbart, who invented the computer mouse to augment the way we communicate with digital devices. In general, I believe that machines should back us up rather than push us out.
In recent weeks, I have seen a new twist on the augmentation approach: instead of machines backing up people, people are backing up machines. For want of a better term, I am calling it Human-Augmented AI (HAAI), and it certainly complies with Engelbart’s original argument that people must always remain in the loop.
The two companies I talked with are about as different from each other as two tech companies can be, except that both are inserting humans into the loop precisely where others are deleting them. I think both products give these companies competitive advantages that will prove good for business. I also see what they are doing as stellar examples of the future of work, as well as of such touchy-feely qualities as ethics, empathy and common sense.
Haptic Hand Holding
Many AI developers acknowledge the flaws that exist today and keep promising that the technology will get better. Perhaps, someday, that online support chatbot will be a haptic hologram sitting next to you: it will hold your hand and comfort you, knowing your personal history and preferences. You will communicate, and even pay, by brainwave. Perhaps someday, but right now such qualities are about as likely to be delivered as the promise of Free Beer Tomorrow in an Irish pub.
Good service today still seems to depend very much upon humans remaining in the loop. While AI can improve accuracy, productivity and even safety, old-fashioned and imperfect human employees are often still needed to get customers, partners and clients what they want.
Let’s look at two companies as examples:
A Sentinel on Your Porch
Deep Sentinel is a Pleasanton, CA-based company offering what it describes as the only home security system that uses sensor cameras with AI object recognition and deep learning to make you and your property safer than was previously possible.
The company was co-founded in 2016 by David Selinger, a serial entrepreneur looking for a new opportunity in AI. He was still searching when a friend and neighbor experienced a home burglary.
This neighbor had a home alarm system similar to Selinger’s own, from the same well-known and trusted brand. Yes, there were humans in the loop: sleepy-eyed “security guards” sitting in bleak rooms with walls covered by lists of false alarms from traditional burglar systems.
According to Selinger, Deep Sentinel is addressing the shortcomings of the traditional home security systems the startup is challenging:
- Most security “guards” are cheap hires who are untrained in crime detection. According to Selinger, most employers don’t train them and don’t even run a police check on them.
- Home system alarms go off randomly all the time. They are almost always false, and the standard procedure is for guards to flip a reset button, then return to gazing blankly for the remainder of their shifts.
- The alarms almost never detect real burglars, and when they do, police often don’t get alerted. If police are called, they usually don’t send a patrol car to investigate. Selinger estimates that the number of arrests generated by home alarms is close to zero.
- Videos are recorded by “dumb” home cameras, and the footage ends up on thumb drives that homeowners give to the police. According to Selinger, the standard operating procedure is for the police to take the drive, drop it into a manila envelope and place it in a file where it is unlikely ever to be viewed. By the time police get the drive, the crime has gone cold, and there is little, if anything, they can do unless they happen to recognize that particular criminal, which rarely occurs.
In short, as Selinger tells it, home alarms give homeowners a false sense of security. The signs warning that a home alarm is installed may deter a few marauding teenagers, but they do little to stop the real professionals.
Real Cops Watching
Selinger was frustrated by what his friend shared with him, and disturbed that his own home was not nearly as secure as he had thought. A true entrepreneurial hacker, he decided to improve the situation himself.
He jury-rigged a system in his own home—and in his friend’s home. It immediately showed better capabilities than a system from, say, ADT.
Other home security services did have humans in the loop, but they were underpaid, underqualified and generally apathetic. Selinger opted instead to augment his new AI alarm system with veteran police officers.
He escorted me to a room in his Pleasanton headquarters and introduced me to a group of about a dozen serious-looking employees. Each turned, eyed me, gave a brief, noncommittal nod, and promptly returned to studying the screens before them.
Many years ago, when I was a cub reporter, I once spent a night in the back seat of a Boston Police patrol car. I was not under arrest but on assignment, reporting on what the officers did and how they operated. This group of Live Sentinels, as Selinger calls them, reminded me of what I experienced so many years ago. These were the sort of people I would trust to protect my home, family, pets and property.
Chats with Burglars
I was told that over the course of a week, the Deep Sentinel camera technology learns to recognize a home’s usual residents, visitors and delivery people. The Live Sentinels verify that these are people who should be outside a protected home.
But the Live Sentinels swing into action when the sensor cameras see possible porch pirates or suspicious intruders. I was told they once even spotted a prowling bear, which must be newsworthy in Pleasanton, about an hour’s drive from San Francisco.
Through the monitor system, a Live Sentinel starts talking to the trespasser, informing the intruder that police have been called and that it would be wise to depart immediately. If the intruder challenges the voice, the Live Sentinel describes the intruder’s appearance and clothing.
I would guess that’s enough to make the boldest thief take flight. The Live Sentinel provides police with a description and location of the perpetrator, making apprehension more likely than would an aging USB key.
Selinger impressed me: the company has only just started selling the systems, yet he says it already has several thousand customers. I had the feeling he has big plans and the wherewithal to make them happen. The company has secured a comfortable Series A round from a group of investors that includes Jeff Bezos.
Innodata’s Human Innovation
At first glance, Innodata, Inc. (NASDAQ: INOD) seems to have little in common with Deep Sentinel. Founded in 1988 and based in New York City, it is a global IT conglomerate with more than 5,000 employees, serving big enterprises in publishing, health, insurance and other industries. It is very much about big data.
Yet Innodata is using AI in much the same way that Deep Sentinel is, and for the same reason: while AI is enormously valuable for optimizing and automating processes, it lacks common sense and is agnostic on ethical issues.
I spoke with Rahul Singhal, Innodata’s chief product officer. We talked more generally about the integration of AI and humans than about Innodata itself. Singhal is a veteran of the early IBM Watson team, which has been under fire for overpromising what AI could achieve in healthcare. We talked about the “abundance of failures” in AI increasingly in the news.
Singhal also mentioned Forrester, which has predicted that this will be the year customers start pushing back against the seemingly relentless advance of chatbots and sales-force automation.
“Everyone is saying that data is the new currency, but if it is, then AI is dirty currency,” he quipped.
This seemed a puzzling stance for an executive making products at a publicly traded big data company. But it turns out that Singhal is driving more than data: he is ensuring that humans are inserted into Innodata’s business loops. As far as I can determine, Innodata is the first, and perhaps only, IT company dedicated to keeping humans in the loop, and he is doing it for reasons not unlike Selinger’s at Deep Sentinel.
Innodata employs a global team of experts with medical, legal and other specialized knowledge, who bring to the loop attributes such as ethics, empathy and experience.
These designated experts are like the ex-cops at Deep Sentinel: they can detect problems early and respond fast when needed. They don’t just sit and watch like the human riding shotgun in an autonomous car; they provide oversight, correct and refine algorithms, and essentially make the AI operate smarter.
For example, a court may issue a decision consisting mostly of unstructured data. Legal experts then step in to provide structure as well as context, performing valuable tasks that AI doesn’t yet do, such as citing other cases relevant to the judgment being scrutinized.
A similar approach is being used in the health industry, where a pharma client was reviewing a new AI technique. Innodata’s medical experts found that the data was not usable. By halting the project early, the pharma company saved considerable money, and perhaps human lives as well.